103 research outputs found

    Review of control strategies for robotic movement training after neurologic injury

    There is increasing interest in using robotic devices to assist in movement training following neurologic injuries such as stroke and spinal cord injury. This paper reviews control strategies for robotic therapy devices. Several categories of strategies have been proposed, including assistive, challenge-based, haptic-simulation, and coaching strategies. The greatest amount of work has been done on developing assistive strategies, and thus the majority of this review summarizes techniques for implementing assistive strategies, including impedance-, counterbalance-, and EMG-based controllers, as well as adaptive controllers that modify control parameters based on ongoing participant performance. Clinical evidence regarding the relative effectiveness of different types of robotic therapy controllers is limited, but there is initial evidence that some control strategies are more effective than others. It is also now apparent that there may be mechanisms by which some robotic control approaches might actually decrease the recovery possible with comparable, non-robotic forms of training. In future research, there is a need for head-to-head comparisons of control algorithms in randomized, controlled clinical trials, and for improved models of human motor recovery to provide a more rational framework for designing robotic therapy control strategies.
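
    The adaptive assistive strategies summarized above are often built around an impedance controller whose gains are tuned to the participant's ongoing performance. The following sketch shows one generic "assist-as-needed" pattern, not a controller taken from the review: the impedance force law, the error-driven gain update with a forgetting term, and the passive-mass participant model are all illustrative assumptions.

    ```python
    import numpy as np

    def assistive_force(k, b, x, v, x_ref, v_ref):
        """Impedance-style assistive force pulling the limb toward a reference."""
        return k * (x_ref - x) + b * (v_ref - v)

    def adapt_gain(k, error, gain_rate=2.0, forgetting=0.05, k_max=200.0):
        """Error-driven gain adaptation with forgetting: assistance grows when
        tracking error is large and decays when the participant does well."""
        return float(np.clip(k + gain_rate * abs(error) - forgetting * k, 0.0, k_max))

    # Toy 1-D training trial: the participant is modelled as a passive mass.
    dt = 0.01
    t = np.arange(0.0, 5.0, dt)
    x_ref = 0.2 * np.sin(2 * np.pi * 0.5 * t)    # desired trajectory [m]
    v_ref = np.gradient(x_ref, dt)
    x, v, k, b, m = 0.0, 0.0, 20.0, 5.0, 1.0     # state, gains, limb mass
    for i in range(len(t)):
        f = assistive_force(k, b, x, v, x_ref[i], v_ref[i])
        v += (f / m) * dt
        x += v * dt
        k = adapt_gain(k, x_ref[i] - x)
    print(f"final stiffness gain: {k:.1f} N/m")
    ```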

    The effect of haptic guidance, aging, and initial skill level on motor learning of a steering task

    In a previous study, we found that haptic guidance from a robotic steering wheel can improve short-term learning of steering a simulated vehicle, in contrast to several studies of other tasks that found that guidance either impairs or does not aid motor learning. In this study, we examined whether haptic guidance-as-needed can improve long-term retention (across one week) of the steering task, with age and initial skill level as independent variables. Training with guidance-as-needed allowed all participants to learn to steer without experiencing large errors. For young participants (age 18–30), training with guidance-as-needed produced better long-term retention of driving skill than training without guidance did. For older participants (age 65–92), training with guidance-as-needed improved long-term retention of tracking error, but not significantly. However, for a subset of less skilled older subjects, training with guidance-as-needed significantly improved long-term retention. The benefits of guidance-based training were most evident as an improved ability to straighten the vehicle direction when coming out of turns. In general, older participants not only systematically performed worse at the task than younger subjects (errors approximately three times greater), but also apparently learned more slowly, forgetting a greater percentage of the learned task during the one-week layoffs between experimental sessions. This study demonstrates that training with haptic guidance can benefit long-term retention of a driving skill for young drivers and for some older drivers, and that such training is most useful for people with less initial skill.
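
    Conceptually, a guidance-as-needed controller of the kind used here can be reduced to a piecewise torque law: no assistance while the steering error stays inside a small deadband, a proportional corrective torque once the error grows, and a saturation limit for safety. The sketch below is an illustration with made-up parameters, not the controller from the study.

    ```python
    def guidance_torque(error, k_guide=1.5, deadband=0.05, torque_max=4.0):
        """Guidance-as-needed steering-wheel torque [N*m] for a signed heading
        error [rad]: zero inside the deadband, proportional outside it,
        saturated for safety. The torque opposes the error; all parameters
        are illustrative."""
        if abs(error) <= deadband:
            return 0.0
        magnitude = min(k_guide * (abs(error) - deadband), torque_max)
        return -magnitude if error > 0 else magnitude

    for e in (0.02, 0.10, -0.40):
        print(f"error {e:+.2f} rad -> torque {guidance_torque(e):+.2f} N*m")
    ```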

    Enhancing touch sensibility by sensory retraining in a sensory discrimination task via haptic rendering

    Stroke survivors are commonly affected by somatosensory impairment, hampering their ability to interpret somatosensory information. Somatosensory information has been shown to critically support movement execution in healthy individuals and stroke survivors. Despite the detrimental effect of somatosensory impairments on performing activities of daily living, somatosensory training, in stark contrast to motor training, does not represent standard care in neurorehabilitation. Reasons for this neglect of somatosensory treatment include the lack of high-quality research demonstrating the benefits of somatosensory interventions on stroke recovery, the unavailability of reliable quantitative assessments of sensorimotor deficits, and the labor-intensive nature of somatosensory training, which relies on therapists guiding the hands of patients with motor impairments. To address this clinical need, we developed a virtual reality-based robotic texture discrimination task to assess and train touch sensibility. Our system can robotically guide the participants' hands during texture exploration (i.e., passive touch) or allow non-guided, free texture exploration (i.e., active touch). We ran a three-day experiment in which thirty-six healthy participants were asked to discriminate the odd texture among three visually identical textures, haptically rendered with the robotic device, following the method of constant stimuli. All participants trained with the passive and active conditions in randomized order on different days. We investigated the reliability of our system using the Intraclass Correlation Coefficient (ICC). We also evaluated the enhancement of participants' touch sensibility via somatosensory retraining and compared whether this enhancement differed between training with the active vs. the passive condition. Our results showed that participants significantly improved their task performance after training. Moreover, we found that training effects were not significantly different between the active and passive conditions, yet passive exploration seemed to increase participants' perceived competence. The reliability of our system ranged from poor (in the active condition) to moderate and good (in the passive condition), probably due to the dependence of the ICC on between-subject variability, which in a healthy population is usually small. Together, our virtual reality-based robotic haptic system may be a key asset for evaluating and retraining sensory loss with minimal supervision, especially for brain-injured patients who require guidance to move their hands.
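
    The abstract reports reliability with the Intraclass Correlation Coefficient but does not state which ICC form was used. As an illustration only, the sketch below computes ICC(2,1) (two-way random effects, absolute agreement, after Shrout and Fleiss) from an n-participants-by-k-sessions score matrix; the example scores are hypothetical.

    ```python
    import numpy as np

    def icc_2_1(scores):
        """ICC(2,1) from an (n_subjects x k_sessions) score matrix."""
        scores = np.asarray(scores, dtype=float)
        n, k = scores.shape
        grand = scores.mean()
        ms_rows = k * np.sum((scores.mean(axis=1) - grand) ** 2) / (n - 1)
        ms_cols = n * np.sum((scores.mean(axis=0) - grand) ** 2) / (k - 1)
        ss_err = (np.sum((scores - grand) ** 2)
                  - (n - 1) * ms_rows - (k - 1) * ms_cols)
        ms_err = ss_err / ((n - 1) * (k - 1))
        return (ms_rows - ms_err) / (
            ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n)

    # Hypothetical discrimination scores: 6 participants, 2 test sessions.
    data = [[0.72, 0.78], [0.55, 0.60], [0.81, 0.79],
            [0.64, 0.70], [0.58, 0.52], [0.77, 0.83]]
    print(f"ICC(2,1) = {icc_2_1(data):.2f}")
    ```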

    Haptic Error Modulation Outperforms Visual Error Amplification When Learning a Modified Gait Pattern

    Robotic algorithms that augment movement errors have been proposed as promising training strategies to enhance motor learning and neurorehabilitation. However, most research effort has focused on rehabilitation of the upper limbs, probably because large movement errors are especially dangerous during gait training, as they might result in stumbling and falling. Furthermore, systematically large movement errors might limit participants' motivation during training. In this study, we investigated the effect of training with novel error-modulating strategies, which guarantee a safe training environment, on motivation and learning of a modified asymmetric gait pattern. Thirty healthy young participants walked in the exoskeletal robotic system Lokomat while performing a foot target-tracking task that required increased hip and knee flexion in the dominant leg. Learning the asymmetric gait pattern was evaluated with three different strategies: (i) no disturbance: no robot disturbance or guidance was applied; (ii) haptic error amplification: unsafe and discouraging large errors were limited with haptic guidance, while haptic error amplification enhanced awareness of the small errors relevant for learning; and (iii) visual error amplification: visually observed errors were amplified in a virtual reality environment. We also evaluated whether increasing movement variability during training, by adding randomly varying haptic disturbances on top of the other training strategies, further enhances learning. We analyzed participants' motor performance and self-reported intrinsic motivation before, during, and after training. We found that training with the novel haptic error amplification strategy did not hamper motor adaptation and enhanced transfer of the practiced asymmetric gait pattern to free walking. Training with visual error amplification, on the other hand, increased errors during training and hampered motor learning. Participants who trained with visual error amplification also reported reduced perceived competence. Adding haptic disturbance increased movement variability during training but did not have a significant effect on motor adaptation, probably because training with haptic disturbance on top of visual and haptic error amplification decreased participants' feelings of competence. The proposed novel haptic error-modulating controller, which amplifies small task-relevant errors while limiting large errors, outperformed visual error augmentation and might provide a promising framework to improve robotic gait training outcomes in neurological patients.
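
    The core idea of the haptic error-modulation strategy, amplifying small task-relevant errors while limiting large and potentially unsafe ones, can be sketched as a piecewise force law on the foot-tracking error. The thresholds and gains below are illustrative placeholders; the actual Lokomat controller used in the study is more elaborate.

    ```python
    import numpy as np

    def error_modulation_force(error, e_amp=0.01, e_limit=0.05,
                               gain_amp=30.0, gain_guide=300.0):
        """Piecewise error modulation on a signed tracking error [m]:
        small errors are amplified (pushed further from the target so they
        become noticeable), large errors are limited with stiff guidance
        back toward the target, and errors in between are left untouched."""
        if abs(error) <= e_amp:
            return gain_amp * error                                       # amplify
        if abs(error) >= e_limit:
            return -gain_guide * (abs(error) - e_limit) * np.sign(error)  # limit
        return 0.0                                                        # no action

    for e in (0.005, 0.03, 0.08, -0.08):
        print(f"error {e:+.3f} m -> force {error_modulation_force(e):+.2f} N")
    ```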

    Effect of Error Augmentation on Brain Activation and Motor Learning of a Complex Locomotor Task

    To date, the functional gains obtained after robot-aided gait rehabilitation training have been limited. Error-augmenting strategies have great potential to enhance motor learning of simple motor tasks. However, little is known about the effect of these error-modulating strategies on complex tasks, such as relearning to walk after a neurologic accident. Additionally, neuroimaging evaluation of brain regions involved in learning processes could provide valuable information on behavioral outcomes. We investigated the effect of robotic training strategies that augment errors (error amplification and random force disturbance) and of training without perturbations on brain activation and motor learning of a complex locomotor task. Thirty-four healthy subjects performed the experiment with a robotic stepper (MARCOS) in a 1.5 T MR scanner. The task consisted of tracking a Lissajous figure presented on a display by coordinating the legs in a gait-like movement pattern. Behavioral results showed that training without perturbations enhanced motor learning in initially less skilled subjects, while error amplification benefited better-skilled subjects. Training with error amplification, however, hampered transfer of learning. Randomly disturbing forces induced learning and promoted transfer in all subjects, probably because the unexpected forces increased subjects' attention. Functional MRI revealed main effects of training strategy and skill level during training. A main effect of training strategy was seen in brain regions typically associated with motor control and learning, such as the basal ganglia, cerebellum, intraparietal sulcus (IPS), and angular gyrus. In particular, random disturbance and no perturbation led to stronger activation than error amplification in similar brain regions. Skill-level-related effects were observed in the IPS, in parts of the superior parietal lobe (SPL), i.e., the precuneus, and in the temporal cortex. These neuroimaging findings indicate that gait-like motor learning depends on an interplay between subcortical, cerebellar, and fronto-parietal brain regions. An interesting observation was the low activation in the brain's reward system after training with error amplification compared to training without perturbations. Our results suggest that, to enhance learning of a locomotor task, errors should be augmented based on subjects' skill level. The impacts of these strategies on motor learning, brain activation, and motivation in neurological patients need further investigation.
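
    As a rough illustration of the task and of the random-disturbance strategy described above, the sketch below generates a Lissajous reference figure for a two-channel, gait-like tracking task and a band-limited random force disturbance. Frequencies, amplitudes, and noise parameters are assumptions, not the values used with MARCOS.

    ```python
    import numpy as np

    # Lissajous reference: each axis is driven by one leg's flexion/extension.
    t = np.linspace(0.0, 10.0, 1000)
    x_ref = np.sin(2 * np.pi * 0.50 * t)                # e.g. right-leg channel
    y_ref = np.sin(2 * np.pi * 0.25 * t + np.pi / 2)    # e.g. left-leg channel

    # Random force disturbance: zero-mean noise, smoothed to be band-limited,
    # added on top of the commanded force to increase movement variability.
    rng = np.random.default_rng(seed=0)
    disturbance = np.convolve(rng.normal(0.0, 5.0, t.size),
                              np.ones(25) / 25, mode="same")  # [N]
    print(f"disturbance range: {disturbance.min():.1f} to {disturbance.max():.1f} N")
    ```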

    Naturalistic visualization of reaching movements using head-mounted displays improves movement quality compared to conventional computer screens and proves high usability.

    BACKGROUND The relearning of movements after brain injury can be optimized by providing intensive, meaningful, and motivating training using virtual reality (VR). However, most current solutions use two-dimensional (2D) screens, where patients interact via symbolic representations of their limbs (e.g., a cursor). These 2D screens lack depth cues, potentially deteriorating movement quality and increasing cognitive load. Head-mounted displays (HMDs) have great potential to provide naturalistic movement visualization by incorporating improved depth cues, to reduce visuospatial transformations by rendering movements in the space where they are performed, and to preserve eye-hand coordination by showing an avatar (with immersive virtual reality, IVR) or the user's real body (with augmented reality, AR). However, elderly populations might not find these novel technologies usable, hampering potential motor and cognitive benefits. METHODS We compared movement quality, cognitive load, motivation, and system usability in twenty elderly participants (>59 years old) while they performed a dual motor-cognitive task with different visualization technologies: an IVR HMD, an AR HMD, and a 2D screen. We evaluated participants' self-reported cognitive load, motivation, and usability using questionnaires. We also conducted a pilot study with five brain-injured patients comparing the visualization technologies while they used an assistive device. RESULTS Elderly participants performed straighter, shorter-duration, and smoother movements when the task was visualized with the HMDs than with the screen. The IVR HMD led to shorter-duration movements than the AR HMD. Movement onsets were shorter with IVR than with AR, and shorter with both HMDs than with the screen, potentially indicating facilitated reaction times due to reduced cognitive load. No differences were found between technologies in the questionnaires on cognitive load, motivation, or usability in elderly participants. Both HMDs showed high usability in our small sample of patients. CONCLUSIONS HMDs are a promising technology to be incorporated into neurorehabilitation, as their more naturalistic movement visualization improves movement quality compared to conventional screens. HMDs demonstrate high usability without decreasing participants' motivation and might potentially lower cognitive load. Our preliminary clinical results suggest that brain-injured patients may especially benefit from more immersive technologies. However, larger patient samples are needed to draw stronger conclusions.
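
    Movement quality in the comparison above is characterized by path straightness, movement duration, and smoothness. The sketch below computes generic versions of these metrics for a recorded hand path (straightness as the ratio of straight-line distance to path length, smoothness as a dimensionless squared-jerk index); the exact definitions used in the study may differ.

    ```python
    import numpy as np

    def movement_quality(positions, dt):
        """Generic metrics for an (n_samples x 3) hand trajectory sampled at dt."""
        positions = np.asarray(positions, dtype=float)
        path = np.sum(np.linalg.norm(np.diff(positions, axis=0), axis=1))
        line = np.linalg.norm(positions[-1] - positions[0])
        straightness = line / path                     # 1.0 = perfectly straight
        duration = (len(positions) - 1) * dt
        vel = np.gradient(positions, dt, axis=0)
        jerk = np.gradient(np.gradient(vel, dt, axis=0), dt, axis=0)
        # Dimensionless squared jerk: lower values indicate smoother movement.
        dlj = np.sum(np.sum(jerk ** 2, axis=1)) * dt * duration ** 5 / line ** 2
        return straightness, duration, dlj

    # Hypothetical 30 cm reach with a small lateral deviation, sampled at 100 Hz.
    traj = np.column_stack([np.linspace(0.0, 0.3, 200),
                            0.02 * np.sin(np.linspace(0.0, np.pi, 200)),
                            np.zeros(200)])
    print(movement_quality(traj, dt=0.01))
    ```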

    First steps towards accelerating the learning of using exoskeletons with immersive virtual reality

    For people with spinal cord injury, learning to use a lower-limb wearable exoskeleton is time-consuming and requires effort from the user as well as extensive therapist time. In this study, we aim to exploit visual feedback delivered through immersive virtual reality with a head-mounted display to accelerate motor learning, for the purpose of using a wearable exoskeleton with minimal supervision.

    Embodiment of virtual feet correlates with motor performance in a target-stepping task: a pilot study

    Immersive Virtual Reality (IVR) has gained popularity in neurorehabilitation for its potential to increase patients' motivation and engagement. A crucial yet relatively unexplored aspect of IVR interfaces is the patients' representation in the virtual world, for example with an avatar. A higher level of embodiment over avatars has been shown to enhance motor performance during upper limb training and could be exploited to enhance neurorehabilitation. However, the relationship between avatar embodiment and gait performance remains unexplored. In this work, we present the results of a pilot study with 12 healthy young participants evaluating the effect of different virtual lower-limb representations on foot placement accuracy while stepping over a trail of 16 virtual targets. We compared three levels of virtual representation: i) a full-body avatar, ii) only feet, and iii) no representation. Full-body tracking is computed using standard VR trackers to synchronize the avatar with the participants' motions. Foot placement accuracy is measured as the distance between the foot's center of mass and the center of the selected virtual target. Additionally, we evaluated the level of embodiment over each virtual representation through a questionnaire. Our findings indicate that foot placement accuracy increases with some form of virtual representation, either the full body or the feet only, compared to having no virtual representation. However, the feet-only and full-body representations do not show significant differences in accuracy. Importantly, we found a negative correlation between the level of embodiment of the foot representation and the distance between the placed foot and the target, whereas no such correlation was found for the full-body representation. Our results highlight the importance of embodying a virtual representation of the foot when performing a task that requires accurate foot placement. However, showing a full-body avatar does not appear to further enhance accuracy. Moreover, our results suggest that the level of embodiment of the virtual feet might modulate motor performance in this stepping task. This work motivates future research on the effect of embodiment over virtual representations on motor control, to be exploited for IVR gait rehabilitation.
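
    Foot placement accuracy is defined above as the distance between the foot's center of mass and the center of the selected virtual target. A minimal sketch of that metric, evaluated on hypothetical placements in the ground plane, is shown below.

    ```python
    import numpy as np

    def foot_placement_error(foot_com_xy, target_center_xy):
        """Euclidean distance [m] between the foot's centre of mass and the
        centre of the stepped-on target, both given in the ground plane."""
        return float(np.linalg.norm(np.asarray(foot_com_xy, dtype=float)
                                    - np.asarray(target_center_xy, dtype=float)))

    # Hypothetical (foot, target) pairs along a trail of targets, in metres.
    placements = [((0.31, 1.02), (0.30, 1.00)), ((0.62, 1.55), (0.60, 1.50))]
    errors = [foot_placement_error(f, t) for f, t in placements]
    print(f"mean placement error: {np.mean(errors) * 100:.1f} cm")
    ```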

    A reconfigurable, tendon-based haptic interface for research into human-environment interactions

    Human reaction to external stimuli can be investigated in a comprehensive way by using a versatile virtual-reality setup involving multiple display technologies. Versatility remains a main challenge when human reactions are examined through haptic interfaces, as the interfaces must be able to cope with the entire range of diverse movements and forces/torques a human subject produces. To address this versatility challenge, we have developed a large-scale, reconfigurable, tendon-based haptic interface which can be adapted to a large variety of task dynamics and is integrated into a Cave Automatic Virtual Environment (CAVE). To demonstrate the versatility of the haptic interface, two tasks were implemented, one targeting the force extrema and the other the velocity extrema of a human subject's extremities: a simulator with 3-DOF, highly dynamic force feedback and a 3-DOF setup optimized for performing dynamic movements. In addition, a 6-DOF platform capable of lifting a human subject off the ground was realized. For these three applications, a position controller was implemented, adapted to each task, and tested. In the controller tests with highly different, task-specific trajectories, the three robot configurations fulfilled the demands on application-specific accuracy, which illustrates and confirms the versatility of the developed haptic interface.
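
    For a tendon-based interface like the one described, a task-space position controller has to be paired with a tension distribution that respects the fact that cables can only pull. The sketch below shows one generic way to do this for a planar toy example: a PD controller produces a desired force, and non-negative least squares distributes it onto cable tensions above a minimum pretension. Gains, cable geometry, and pretension are illustrative assumptions, not parameters of the described system.

    ```python
    import numpy as np
    from scipy.optimize import nnls

    def pd_force(x, v, x_ref, v_ref, kp=800.0, kd=60.0):
        """Task-space PD position controller (placeholder gains)."""
        return kp * (x_ref - x) + kd * (v_ref - v)

    def tension_distribution(A, wrench, t_min=20.0):
        """Distribute a desired wrench onto cable tensions with t >= t_min,
        where A maps tensions to the end-effector wrench (cables only pull)."""
        extra, _ = nnls(A, wrench - A @ (t_min * np.ones(A.shape[1])))
        return t_min + extra

    # Toy 2-D end effector held by 3 cables; columns are unit cable directions.
    angles = (np.pi / 2, np.pi + np.pi / 6, -np.pi / 6)
    A = np.array([[np.cos(a), np.sin(a)] for a in angles]).T
    f_des = pd_force(np.array([0.05, -0.02]), np.zeros(2), np.zeros(2), np.zeros(2))
    print(tension_distribution(A, f_des))    # cable tensions [N]
    ```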